
[WIP][Docs] Getting Started With TVM Tutorial #7612

Closed
wants to merge 9 commits into from

Conversation

hogepodge (Contributor):

This patch refactors a number of tutorials, and adds additional documents, to create a Getting Started with TVM Tutorial. The aim of the tutorial is to guide a new user of TVM through the features and architecture of the project. It starts with a basic overview of TVM, followed by some guidance on how to install TVM.

It then works through examples of how to optimize a ResNet-50v2 model using TVMC and the Python interface. Having given a high-level example of what TVM is capable of, it then takes a bottom-up approach, describing TE using vector addition and matrix multiplication, then building on the matrix multiplication example to describe templates, AutoTVM, and Auto Scheduling. Finally, it covers some additional information about targeting GPUs and RPC.
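The bottom-up TE examples mentioned above start from vector addition and matrix multiplication. As a plain-NumPy reference of the computations those TE examples express (the shapes below are illustrative assumptions, not taken from the tutorials), a minimal sketch:

```python
import numpy as np

# Illustrative shapes; the actual tutorial sizes may differ.
n = 1024
a = np.random.rand(n).astype("float32")
b = np.random.rand(n).astype("float32")

# Vector addition: C[i] = A[i] + B[i]
c = a + b

M, K, N = 64, 128, 32
x = np.random.rand(M, K).astype("float32")
y = np.random.rand(K, N).astype("float32")

# Matrix multiplication: Z[i, j] = sum over k of X[i, k] * Y[k, j]
z = x @ y
```

In the tutorials these same computations are declared with `te.placeholder` and `te.compute`, and the schedule around them (loop order, vectorization, tiling) is what AutoTVM and the auto-scheduler then search over.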

################################################################################
# Installing from TLC Pack
# ------------------------
# TVM is packaged and distributed by the volunteer TLCPack community.
Contributor:

Can you show the pip install command for tlcpack here?

Contributor (Author):

The reason I didn't insert this is that there are so many different strings to choose from, and the pip nightly isn't available yet (which, until we have a 0.8 release, would be the preferred way to do this). I think that adding it once our TLCPack story has stabilized is the right thing to do.
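For readers landing on this thread later, a hedged sketch of what such an install command looks like. The package name and index URL here are assumptions based on tlcpack.ai; check that site for the current strings, which vary by release and CUDA version:

```shell
# Install a TVM build from TLCPack (name and URL are assumptions,
# not confirmed by this thread -- verify against tlcpack.ai).
pip install tlcpack -f https://tlcpack.ai/wheels

# Nightly builds, once available:
# pip install tlcpack-nightly -f https://tlcpack.ai/wheels
```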

# - Ability to allow the user to mix the two programming styles
#
# Relay (or more specifically, its fusion pass) is in charge of splitting the
# neural network into small subgraphs, each of which is a task.
Contributor:

You introduce task here. Maybe define it?

(Review thread on tutorials/get_started/introduction.py — outdated, resolved)
# 3. Lower to *TE*, tensor expressions that define the *computational
# operations* of the neural network.
# Upon completing the import and high-level optimizations, the next step is
# to decide how to implement the Relay representation on a hardware target.
Contributor:

I don't think implement is the correct word here. Lower is what you want, but you define it later on. Maybe 'convert'?

(Outdated, resolved review threads on: tutorials/get_started/introduction.py; tutorials/get_started/tune_matmul_x86.py (×2); tutorials/get_started/tvmc_command_line_driver.py (×3))
# The diagram below illustrates the steps a machine learning model takes as it
# is transformed with the TVM optimizing compiler framework.
#
# .. image:: /_static/img/tvm.png
Member:

Thanks Chris, please send the binary image to a separate repo (e.g. https://github.com/tlc-pack/web-data) and refer to it using an https link.

We cannot check in binary data (images) into the code repo.

Contributor (Author):

I'm going to make all of the changes based on suggestions, then will send a new collection of patch-sets up.

@tqchen (Member) commented Mar 10, 2021:

Thanks @hogepodge, it would be great to send the changes to each tutorial file (.py) as separate PRs, so they are easier to review and we can merge each one incrementally.

@tqchen tqchen assigned tqchen and unassigned tqchen Mar 10, 2021
@areusch (Contributor) left a comment:

yeah there is definitely a lot here but generally I like the direction this is going. I didn't read all of the tutorials, but here's a few comments. is it possible to provide a render of the tutorials on a GH repo somewhere (or include with this as a temporary patch)?

(Review thread on tutorials/get_started/introduction.py — outdated, resolved)
# - Ability to allow the user to mix the two programming styles
#
# Relay applies several high-level optimizations to the model, after which
# it runs the Relay Fusion Pass. To aid in the process of converting to
Contributor:

what's the Relay Fusion pass?

(Review thread on tutorials/get_started/tensor_expr_get_started.py — outdated, resolved)
# Change this target to the correct backend for your GPU. For example: cuda (NVIDIA GPUs), rocm (Radeon GPUs), OpenGL (???).
tgt_gpu = "cuda"

# Recreate the schedule
Contributor:

nit: indent?

Contributor (Author):

Do you mean the four spaces? I've wrapped this in an "if" block to make it optional. I think this part needs more work, but I'd like to take it up in a larger extension that talks about targeting GPUs.

# In our example, the device code is stored in PTX, along with a metadata
# JSON file. They can be loaded and linked separately via import.
#
# The CPU (host) module is directly saved as a shared library (.so). There
Contributor:

I guess I'd also generally support a split of tutorials at each point where we could save a meaningful artifact to disk, e.g. after Relay import, and after tvm.relay.build.

@hogepodge (Author):
New patch set for introduction and installation: #7638

@hogepodge (Author):

New patch set for tvmc: #7640

@hogepodge (Author):

Auto Tuning with Python #7641
Tensor Expressions #7642
AutoTVM and matmul #7643
Auto Scheduler and matmul #7644

@hogepodge (Author):

Closed as this work is covered in the PRs indicated above.

@hogepodge hogepodge closed this Mar 17, 2021
@hogepodge hogepodge deleted the tutorial branch June 24, 2021 23:31